Usability evaluations are good for a lot of things, but determining what a team's priorities should be is not one of them. Fortunately, there is an explanation for these counterintuitive outcomes, one that can help us choose a more appropriate course of evaluation.
Right questions, wrong people, and vice versa
First, different teams get different results because tests and research are often performed poorly: teams either ask the right questions of the wrong people or ask the wrong questions of the right people.
In one recent case, the project goal was to improve usability for a site's new users. A card-sorting session, a perfectly appropriate discovery method for planning information architecture changes, revealed that the existing, less-than-ideal terminology used throughout the site should be retained. This happened because the team ran the card sort with existing site users instead of the new users it aimed to entice.
In another case, a team charged with improving the usability of a web application clearly in need of an overhaul ran usability tests to identify major problems. In the end, they determined that the rather poorly designed existing task flows should not only be kept, but featured. This team, too, ran its tests with existing users, who had, as one might guess, become quite proficient at navigating the inadequate interaction model.